โ† Back to Contents
Week 6 • Sub-Lesson 6

🧪 Hands-On: Activities & Assessment

Putting it all together: experimenting with writing processes, prompt engineering for ideation, auditing AI output, and your weekly assessment

What We'll Cover

This final session of Week 6 is entirely practical. You will work through three activities that build on everything we have covered this week about AI-assisted writing, and we will introduce the weekly assessment. The activities are designed to make the differences between human writing and AI writing viscerally clear, to give you hands-on experience with structured prompting for research ideation, and to develop the auditing habits that protect your academic integrity.

By the end of this session, you will have first-hand evidence of how AI changes the writing process, a repertoire of prompting strategies for generating research ideas, and a practised workflow for auditing AI-assisted text before it enters your work.

โœ๏ธ Activity 1: The Writing Process Experiment

Objective

Experience first-hand the difference between writing your own text and editing AI-generated text. This is the most important exercise of the week because it makes the abstract debate about AI writing concrete and personal. You will discover something about your own writing process that no amount of reading about it can teach you.

Setup

  1. Choose a topic related to your research. It can be the same topic you used for Week 5 activities, or a different one. The topic should be something you know well enough to write about without consulting sources.
  2. Write a 300-word argument on that topic entirely by yourself. No AI. No templates. Set a timer: you have 20 minutes. Write in whatever way feels natural to you, but aim for an argument with a clear position, not just a description of the topic.
  3. Now ask AI (ChatGPT, Claude, or Gemini) to write a 300-word argument on the same topic. Give it the same brief you gave yourself: same topic, same word count, same instruction to take a clear position.
  4. Compare the two versions side by side. Print them out or place them in adjacent windows. Read each one carefully, then work through the reflection questions below.

What to Record

  • Which version is more polished, with smoother sentences, better transitions, and fewer grammatical issues?
  • Which version is more "you", reflecting your actual thinking, your perspective, and the way you naturally express ideas?
  • Which version do you understand better and could defend more confidently if questioned at a seminar?
  • Which version contains ideas you hadn't considered? Did the AI introduce any genuinely new angles?
  • If you had to present one version at a seminar, which would you choose? Why?

💬 Discussion

This exercise usually reveals that AI text is smoother but shallower. Your own text may be rougher but reflects genuine understanding. The AI version often reads like a well-constructed essay that could have been written about almost any topic: competent but generic. Your version, even if it has awkward sentences or incomplete thoughts, typically contains something the AI cannot replicate: the specific insights that come from actually working in your field. This is the difference between writing-as-thinking and writing-as-performance.

💡 Activity 2: Prompt Engineering for Ideation

Objective

Practise structured prompting for research ideation and observe how different strategies produce different results. A 2024 study by Si et al. found that LLM-generated research ideas were rated as significantly more novel than human expert ideas, though with suggestive (but not statistically significant) evidence of weaker feasibility, and notably less diversity across ideas. This exercise lets you test those findings against your own field.

Setup

  1. Choose a broad research area from your field. It should be broad enough that there are many possible research questions within it, but specific enough to be meaningful (e.g., "machine learning in healthcare" rather than just "AI").
  2. Round 1 (naive prompt): Ask AI: "Generate 10 research ideas about [your area]." Record all 10 ideas exactly as the AI produces them. Do not refine or follow up; this is your baseline.
  3. Round 2 (chain-of-thought): Ask AI: "Think step by step about what gaps exist in current research on [your area], then generate 10 research ideas that address those gaps." Record all 10 ideas.
  4. Round 3 (persona-based): Ask AI: "You are a [specific expert type] with 20 years of experience in [adjacent field]. What research questions about [your area] would you find most interesting and worth investigating?" Record the response.
    • Choose a persona from a field adjacent to yours, not your own field; this is what makes the cross-pollination interesting
    • For example, if you study ecology, try a data scientist persona; if you study education, try a behavioural economist
  5. Round 4 (constraint-based): Ask AI: "Generate 10 research ideas about [your area] that do NOT involve [the most obvious approach in your field]. Focus on underexplored methodologies or populations." Record all 10 ideas.
    • The constraint should exclude whatever the default or dominant methodology is in your area
    • For example: "that do NOT involve surveys" or "that do NOT use regression analysis" or "that do NOT focus on WEIRD populations"
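
If you want to run all four rounds in one sitting and keep the raw outputs for later comparison, the sketch below shows one way to script them in Python. This is a minimal sketch, assuming the openai package and an OPENAI_API_KEY environment variable; the model name, the ask() helper, and the placeholder topic, persona, and constraint are all illustrative, and running the rounds by hand in any chat interface works just as well.

  # Minimal sketch of the four prompting rounds. Assumes the `openai`
  # package and an OPENAI_API_KEY environment variable; the model name
  # and all placeholder values are illustrative.
  from openai import OpenAI

  client = OpenAI()

  def ask(prompt: str) -> str:
      """Send one prompt in a fresh context and return the reply."""
      response = client.chat.completions.create(
          model="gpt-4o",  # illustrative; use whichever model you have
          messages=[{"role": "user", "content": prompt}],
      )
      return response.choices[0].message.content

  area = "machine learning in healthcare"  # your broad research area
  persona = "behavioural economist"        # an expert from an adjacent field
  constraint = "surveys"                   # your field's default methodology

  rounds = {
      "Round 1 (naive)":
          f"Generate 10 research ideas about {area}.",
      "Round 2 (chain-of-thought)":
          f"Think step by step about what gaps exist in current research "
          f"on {area}, then generate 10 research ideas that address those gaps.",
      "Round 3 (persona)":
          f"You are a {persona} with 20 years of experience. What research "
          f"questions about {area} would you find most interesting and worth "
          f"investigating?",
      "Round 4 (constraint)":
          f"Generate 10 research ideas about {area} that do NOT involve "
          f"{constraint}. Focus on underexplored methodologies or populations.",
  }

  # Run each round as a separate, single-turn request so earlier answers
  # cannot bleed into later ones; save the raw text for your notes.
  for name, prompt in rounds.items():
      print(f"\n=== {name} ===\n{ask(prompt)}")

Keeping each round in a fresh context matters: in a single ongoing chat, later rounds are contaminated by the model's earlier answers, which makes convergence look worse (or better) than it really is.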

What to Record

  • How did the ideas change across the 4 rounds? Was there a clear progression in quality or originality?
  • Which round produced the most surprising or novel ideas, ones you genuinely had not considered before?
  • Which round produced the most feasible ideas, ones that could realistically be pursued with available resources?
  • Did you notice repetition or convergence across rounds? Did the AI keep returning to the same themes regardless of the prompt? (A rough way to quantify this is sketched after this list.)
  • Select your top 3 ideas from ALL rounds combined and justify your selection. What makes these three stand out?
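
If you want a rough quantitative check on convergence, the sketch below computes the average pairwise word overlap (Jaccard similarity) between ideas. The idea strings are invented placeholders; paste in the ideas you actually recorded. A high average suggests the model is circling the same themes.

  # Rough convergence check: average pairwise Jaccard similarity of
  # word sets. The idea strings below are invented placeholders.
  import itertools

  def jaccard(a: str, b: str) -> float:
      """Word-level Jaccard similarity between two idea strings."""
      words_a, words_b = set(a.lower().split()), set(b.lower().split())
      return len(words_a & words_b) / len(words_a | words_b)

  ideas = [
      "federated learning for privacy-preserving hospital data sharing",
      "privacy-preserving federated models for rural hospital networks",
      "wearable sensor data for early sepsis detection in care homes",
  ]

  pairs = list(itertools.combinations(ideas, 2))
  average = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
  print(f"Average pairwise similarity: {average:.2f}")

Word overlap is a crude proxy that misses paraphrase, but it is enough to flag near-duplicate ideas within or across rounds.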

๐Ÿ” Activity 3: The Audit Exercise

Objective

Practise auditing AI-assisted writing for accuracy, voice, and integrity. This exercise takes the audit techniques from Sub-Lesson 4 and applies them to a real piece of AI-generated text. The goal is to build muscle memory for the kind of critical reading that should become automatic whenever you work with AI-generated content.

Setup

  1. Take the AI-generated 300 words from Activity 1 (or, if you skipped Activity 1, ask AI to write 300 words arguing a position on a topic in your field).
  2. Apply each audit technique from Sub-Lesson 4 systematically. Work through them one at a time; do not skip any, even if you think the text looks fine.
  3. Read aloud: read the entire text out loud, slowly. Mark any sentences that do not sound natural or do not sound like something you would say. Pay attention to where you stumble, where the rhythm feels wrong, and where the language feels too generic or too formal for how you actually communicate.
  4. The "explain this paragraph" test: cover the AI text and, for each paragraph, explain the argument from memory in your own words. If you cannot do this fluently, you do not fully understand or own that paragraph's argument.
  5. Check every factual claim: identify every statement that makes a factual assertion (statistics, dates, claims about research findings, cause-and-effect statements). Verify each one against a reliable source. Note which claims are accurate, which are vague, and which are wrong. (A minimal way to organise this tally is sketched after this list.)
  6. Internal consistency: read the text as a whole. Does the argument hold together logically from start to finish? Are there contradictions? Does the conclusion actually follow from the evidence presented? Does each paragraph connect to the next?
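
For step 5, it helps to keep the fact-check tally in one structured place rather than in scattered margin notes. Below is a minimal sketch of one possible layout; the Claim record, its fields, and the example entry are all invented for illustration.

  # Minimal claim-tracking record for the fact-checking step.
  # The record layout and the example entry are illustrative only.
  from dataclasses import dataclass

  @dataclass
  class Claim:
      text: str     # the factual assertion as written in the AI text
      source: str   # where you tried to verify it
      verdict: str  # "accurate", "vague", or "wrong"

  audit_log = [
      Claim("Most journals now mandate AI disclosure",
            "publisher policy pages", "vague"),
  ]

  needs_fixing = sum(claim.verdict == "wrong" for claim in audit_log)
  print(f"{needs_fixing} of {len(audit_log)} claims need correction")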

What to Record

  • How many sentences did you flag when reading aloud? What patterns did you notice in the flagged sentences?
  • Could you explain every paragraph without looking? Which paragraphs were hardest to recall and why?
  • How many factual claims needed correction? Were the errors subtle or obvious?
  • What would you change to make this text genuinely yours, not just corrected but authentically representing your thinking and voice?

๐Ÿ“ Weekly Assessment

AI-Assisted Writing Sample (800 words)

This week's assessment asks you to write a substantial piece of academic text using AI tools, then document and reflect on the process with complete transparency. The assessment is designed to test not just the quality of the writing, but the quality of your engagement with the tools โ€” how thoughtfully you integrated AI into your process, and how critically you evaluated its contributions.

Requirements

  1. Choose a section of your research project to write: introduction, literature review, methods, or discussion. It should be a section where AI assistance is potentially useful. If you do not yet have a defined research project, choose a topic from your field and write the section as if it were part of a real paper.
  2. Write the section (800 words) using AI tools as you see fit. You may use any combination of tools and strategies from this week โ€” ChatGPT, Claude, Gemini, or any other AI writing tool. You may use AI for brainstorming, drafting, editing, restructuring, or any other purpose. There are no restrictions on how much or how little AI you use, but you must document everything honestly.
  3. Include a Process Log (minimum 200 words, does not count toward 800): Document exactly how AI contributed at each stage of your writing process. What prompts did you use? What did the AI produce? What did you accept, modify, or reject? Where did you write without AI assistance? Be specific: include the actual prompts you used and describe the AI's responses.
    • Think of this as a lab notebook for your writing process
    • Vague statements like "I used AI for editing" are not sufficient; describe what you asked, what it suggested, and what you did with those suggestions
  4. Include a Self-Audit (minimum 200 words, does not count toward 800): Identify at least 3 specific places where you corrected, overrode, or rejected AI suggestions. For each one, explain why you made that decision. What was wrong with the AI's suggestion? What did you replace it with and why?
    • Use the audit techniques from Sub-Lesson 4 and Activity 3
    • If you cannot find 3 places to correct or override, that itself is worth reflecting on: did you engage critically enough with the AI output?
  5. Include a Disclosure Statement as you would for a journal submission: use one of the templates from Sub-Lesson 5, adapted to describe your actual use of AI in producing this piece. The statement should be specific enough that a reader could understand exactly what role AI played in your writing.
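
To calibrate the expected level of specificity, here is a generic illustration modelled on common publisher wording. It is not one of the Sub-Lesson 5 templates, and the tool and tasks named are placeholders; your statement must describe what you actually did.

  "During the preparation of this work, the author used [tool name] to brainstorm counter-arguments and to edit sentence-level phrasing. All prompts and responses are documented in the accompanying process log. The author reviewed, verified, and revised all AI-assisted text and takes full responsibility for the content of this submission."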

Assessment Criteria:

Writing Quality (30%)

Coherent argument, appropriate academic voice, evidence of genuine understanding. Does it sound like a researcher who knows their field? We are looking for depth of thought, not just polish. A beautifully written piece that says nothing substantive will score lower than a rougher piece with genuine insight.

Process Log (25%)

Honest, detailed documentation of AI use. Shows thoughtful integration rather than wholesale delegation. The best process logs reveal a researcher who used AI strategically, knowing when to lean on it and when to step back and think independently.

Self-Audit (25%)

Evidence of critical engagement with AI output. Quality of reasoning about what to accept and reject. We want to see that you can identify weaknesses in AI-generated text and articulate why your judgement is better than the AI's in specific instances.

Disclosure Statement (20%)

Appropriate, honest, and specific. Would this satisfy a journal's requirements? A good disclosure statement is neither defensive nor dismissive; it simply and clearly describes what happened, following the frameworks covered in Sub-Lesson 5.

📤 Submission

Upload to Amathuba by the deadline indicated on the activity. Include the process log, self-audit, and disclosure statement as clearly marked sections within your submission. The process log and self-audit do not count toward the 800-word limit for the writing sample itself.

Week 6 Summary & Key Takeaways

  • Writing is thinking: outsourcing the writing process to AI risks outsourcing the intellectual development that makes you a researcher. The struggle to articulate your ideas is not a problem to be solved; it is the process through which understanding deepens.
  • AI can generate ideas rated as MORE novel than human expert ideas, but with weaker feasibility and notably less diversity; use AI to expand your thinking, not replace it. The best research ideas come from the intersection of AI-generated possibilities and your domain expertise.
  • AI writing tools are powerful equalisers for non-native English speakers, but they homogenise writing toward Western, generic academic styles. Be aware of what you gain in fluency and what you may lose in distinctive voice and perspective.
  • Scientific integrity requires awareness of the risk spectrum, from grammar fixes (low risk) to generating arguments (very high risk). Know where on that spectrum you are operating at every moment, and be honest about it.
  • Journal policies are converging on mandatory disclosure; build the habit now. Every major publisher now requires some form of AI use declaration. Treating this as a routine part of your workflow, rather than an afterthought, protects you and strengthens the research community.
  • The principled workflow: think first, outline yourself, draft with AI assistance, audit thoroughly, revise in your own voice. This sequence matters: starting with your own thinking ensures that AI augments rather than replaces your intellectual contribution.
  • The most important question: can you defend every sentence in your work? If someone challenged any claim, any phrasing, any argument in your text, could you explain why it is there and what it means? If not, more revision is needed.

Looking ahead: Next week, we move from writing to data with AI for Data Analysis and Visualisation. We will explore how AI tools can help with statistical analysis, data cleaning, visualisation, and interpretation, with the same critical eye we have applied to literature review and writing.